This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Fix for wrong reqs set after switching from training to inference #16553

Merged: 3 commits into apache:master on Oct 24, 2019

Conversation

@ptrendx (Member) commented Oct 20, 2019

Description

StaticAllocMemory in CachedOp used the storage_inplace_index attribute (generated by the MXPlanMemory graph pass) to assign reqs to the edges of the graph. However, the MXPlanMemory pass is called only once per type of memory plan (full, forward, or backward). While the memory plan itself is stored in separate graph attributes (forward_mem_plan, full_mem_plan, etc.), the storage_inplace_index (and therefore the reqs) was overwritten by whichever MXPlanMemory call ran last.
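
Schematically, the interaction between the cached passes and the shared attribute looks like this (a minimal conceptual sketch in Python, not MXNet's actual internals; all names and values below are illustrative):

# Conceptual model only: one shared storage_inplace_index that every
# MXPlanMemory run overwrites, next to per-plan attributes that are kept.
graph_attrs = {}
planned = set()   # MXPlanMemory runs only once per plan type

def mx_plan_memory(plan_type, needs_backward):
    if plan_type in planned:   # cached: later calls for this plan are skipped
        return
    planned.add(plan_type)
    reqs = {"backward_only_output": "write" if needs_backward else "null"}
    graph_attrs["storage_inplace_index_" + plan_type] = reqs  # per-plan copy (kept)
    graph_attrs["storage_inplace_index"] = reqs               # shared copy (clobbered)

mx_plan_memory("full", needs_backward=True)       # first training call
mx_plan_memory("forward", needs_backward=False)   # inference call overwrites it
mx_plan_memory("full", needs_backward=True)       # later training call: no-op

print(graph_attrs["storage_inplace_index"])       # {'backward_only_output': 'null'}
print(graph_attrs["storage_inplace_index_full"])  # {'backward_only_output': 'write'}

After the inference call, reading the shared attribute yields the forward-only reqs even for the full plan, while the per-plan attribute stays correct.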

The following code:

import mxnet as mx
# Assumed context: `net` is a Gluon block hybridized with static_alloc=True
# (the affected CachedOp path) and `x` is an input NDArray.

with mx.autograd.record():
    result = net(x)
result.backward()

result2 = net(x)

with mx.autograd.record():
    result3 = net(x)
result3.backward()

calls first the memory-planning pass for the full graph and then for just the forward graph. The third invocation of net does not trigger a memory-planning pass at all. Now assume that somewhere inside net is an op that produces an output needed only by the backward pass, and produces it only when requested (req not set to kNullOp). Since the reqs are overwritten by the second invocation of net (which has no backward pass), that output's req is set to kNullOp. The third invocation of net does not change the req value, so the op does not produce the required output; result3's gradient is therefore computed from stale values and is wrong.
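
A regression check along these lines could look like the following sketch (hypothetical: the network, shapes, and tolerance are placeholders, and as the discussion below notes, only an op that skips writing its backward-only output based on the req would actually expose the bug):

import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)                  # placeholder block; a req-sensitive op is needed in practice
net.initialize()
net.hybridize(static_alloc=True)   # the affected static-alloc CachedOp path

x = mx.nd.random.uniform(shape=(2, 8))
x.attach_grad()

def train_step():
    with mx.autograd.record():
        out = net(x)
    out.backward()
    return x.grad.copy()

grad_before = train_step()   # plans memory for the full graph
_ = net(x)                   # inference call plans the forward graph (this used to clobber the reqs)
grad_after = train_step()    # reuses the cached plan; with stale reqs this gradient came out wrong

assert mx.nd.abs(grad_before - grad_after).sum().asscalar() < 1e-6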

This PR fixes the issue by changing StaticAllocMemory to use the per-memory-plan attributes (storage_inplace_index_forward, etc.) when assigning reqs, keeping the benefit of caching (MXPlanMemory is still called once per plan type) while ensuring correctness.

@eric-haibin-lin

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)

@szha (Member) left a comment:

LGTM

@szha (Member) commented Oct 21, 2019

As follow-up:

  1. Ideally we should be able to test for this explicitly. Given that we lack the utility for such testing, I want to avoid blocking this fix on that basis.
  2. The keys/magic strings should ideally be centralized, maybe in an enum.

@ptrendx (Member, Author) commented Oct 21, 2019

For 1: this error was found while working on pointwise fusion, which uses the reqs to compile the right code. I don't know of any other operator that may skip writing an output when there is no backward pass for it (I know mx.sym.contrib.box_nms would have that behavior, but its current implementation writes the backward-only output every time, without looking at the req value). I can add the test in the pointwise fusion PR, show that it fails, and then merge master into it once this fix is in to show that it no longer fails.
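
To make that concrete, the kind of operator behavior being described looks roughly like this (a plain-Python illustration, not a real MXNet operator; all names are made up):

# An "op" that fills its backward-only output just when that output's req is
# not null -- exactly the pattern that a stale req breaks.
def fused_forward(data, req, out):
    out["y"] = [2 * v for v in data]              # main output, always written
    if req.get("aux_for_backward") != "null":     # skipped when there is no backward pass
        out["aux_for_backward"] = list(data)      # consumed only by the backward op
    return out

training = fused_forward([1.0, 2.0], {"aux_for_backward": "write"}, {})
inference = fused_forward([1.0, 2.0], {"aux_for_backward": "null"}, {})
print(sorted(training))    # ['aux_for_backward', 'y']
print(sorted(inference))   # ['y']  (a stale null req makes a training call behave like this)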

For 2: I agree, probably as const static members of CachedOp?

@szha (Member) commented Oct 21, 2019

Sounds good to both.

@eric-haibin-lin (Member) left a comment:

Thanks for the fix! If I understand it correctly, this means hybridize(static_alloc=True) has been problematic for a while, rather than being a recent regression?

@ptrendx (Member, Author) commented Oct 21, 2019

@eric-haibin-lin Yes, it has had this bug since the beginning, I believe, but I don't think there were any ops that would expose it until now.

ptrendx added a commit to ptrendx/mxnet that referenced this pull request Oct 23, 2019
@ptrendx (Member, Author) commented Oct 23, 2019

OK, I added the test to #15167 that should fail because of this issue. I will move the string constants to const static members of CachedOp, merge this PR, and then merge master into #15167 to show that it fixes the problem.

@ptrendx (Member, Author) commented Oct 23, 2019

As expected, the new test failed here: http://jenkins.mxnet-ci.amazon-ml.com/job/mxnet-validation/job/centos-gpu/job/PR-15167/50/display/redirect

@ptrendx ptrendx merged commit 9c99bf2 into apache:master Oct 24, 2019
pengzhao-intel pushed a commit that referenced this pull request Oct 28, 2019
pengzhao-intel pushed a commit that referenced this pull request Oct 28, 2019
* fixed broken links across multiple files (#16581)

* fix missing docs due to git add issues (#16496)

* Create SECURITY.md (#16573)

* Create SECURITY.md

* Update SECURITY.md

* [Numpy] Support N_D(N>=3) batch_dot (#16586)

* Support N_D(N>=3) batch_dot

* use 1E-4

* fix lint

* remove unnecessary comment

* Update test_numpy_op.py

* Large Vector tests for DGL Ops Part 2 (#16497)

* add hyperbolic, logical, sign and regression tests for large vector

* changed hyperbolic functions into existing trignometric functions

* fix trigo and simple bind needs shape as tuple

* fix logical ops, add with_seed

* fix arcosh in largearray, remove regression from largevector

* [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (#16597)

* Make MXIsNumpyShape return enum

* address the comment

* Surpress subgraph log in CI (#16607)

Change-Id: Ia2ed6fdbb1d2cb5cc607a8856ca13ee338e27eac

* Fix dequantize memory corruption (#16606)

Change-Id: I51b62a32987bdbcf96f04b1bc6617e66796f648b

* [MKLDNN]Fix reorder2default (#16602)

* Fix reorder2default

Change-Id: I74c87af9535f6264e6d1ea7eaed089a6480a3358

* fix

Change-Id: I6d07b43b520a47e7c78bd4b4b6390f5fb95e6957

* Fix

Change-Id: Id72f25c34291be4711f55569c6d61467edd6113d

* Fix CI

Change-Id: I8c33a82555d5ace2d0b682c1e3eefa13f3a44768

* Run CI

Change-Id: Ie8a6dab80ef91c0337cafbae4e3db277e0c7ebf7

* second round of fixing broken links in multiple files (#16598)

* Python Docstring Convetion (#16550)

* Docstring convetnion for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention

* Revert removing new line

* Remove white space

* [MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)

* Fix for wrong reqs set after switching from training to inference (#16553)

* Debugging reqs

* Move literal strings to const static members

* Fix lint

* julia/docs: more DRY on page rendering (#16396)

* Disables test_bulking_operator_gpu due to flakiness (#16611)

* C Api for simplebind, fix comment for trigoops, add atol to assert (#16585)

* C Api for simplebind, fix comment for trigoops, add atol to assert

* fix build issues

* fix lint and add regression test

* fix indent

* api doc and function name change

* fix lint and add infer shape test

* Imagenet inference to nightly fix (#16599)

* split to cd and shell

* comment

* lots of prints

* copy binary at correct location

* remove comments

* add mkl lib

* update docker run build function

* set nvidia docker true to run imagenet inference on GPU

* Revert "set nvidia docker true to run imagenet inference on GPU"

This reverts commit 98f8eef.
As we don't need GPU for compilation.

* Fix python doc build issue (#16630)

* pin the pip versions

* remove nbconvert comment

* Faster general take (#16615)

* Sped up perf of take op when axis != 0

* Formatting and syntax fixes

* Rename Take to specify axis

* Fix line length lint errors

* [Gluon] Don't serialize shared parameters twice (#16582)

Add deduplicate argument (default of False) to save_parameters.

* Fix index overflow bug in einsum (#16589)

* fix index overflow

* check index overflow

* fix index overflow in einsum path

* fix indent

* reduce NPY_MAXARGS

* safe accumulate

* Move some subgraph verbose to MXNET_SUBGRAPH_VERBOSE=2 (#16622)

* Move subgraph pass log to verbose=2

* Run CI

* add npx reshape (#16640)

* RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)

* fix bad encode (#16641)

* [Perl] - ndarray to native array conversion fix (#16635)

* fixing broken links in multiple files - round 3 (#16634)

* add type switch to weight tensor (#16543)

* numpy doc enhancement (#16637)

* Change NDArray to ndarray for npx ops

Add nonzero

boolean mask supports boolean ndarray

Add argmin op and interoperability test for nonzero

Fix vdot, inner, outter docs

Add nonzero to mx.nd.np

Add docs

Fix

* Fix lint

* Fix

* Fix

* Fix get_constant

* Disable float16 test (#16643)

* Fix GetMKLDNNData for delay alloc (#16618)

* Fix GetMKLDNNData for delay alloc

* Run CI

* Run CI

* Run CI

* Run CI

* Run CI

Change-Id: I7ac2796e0ee8439c92fd2bd7a70a23a359b76b12

* Revert "[mkldnn-1.0]Rebase to master (#16648)"

This reverts commit dea3dd2.
pengzhao-intel pushed a commit that referenced this pull request Oct 31, 2019
* [mkldnn-v1.0] Initiate the transition to MKL-DNN v1.0 (#15706)

* update mkldnn to 1.0.1 release

* change makefile

* change cmake

* update ci build and pip package build

* fix typo in mkldnn.mk

* fix build for USE_BLAS=mkl & bump MKL version

* skip mkldnn unit tests

* remove iomp5 from mx_mkldnn_lib

* ci: skip test_mkldnn_install

* retrigger ci

* retrigger ci

* retrigger ci

* [mkldnn-v1.0] Update MKL-DNN to v1.0.2 (#16012)

* bump mkldnn to v1.0.2

* skip quantization unit test

* add useless build flag

* Fixes openblas installation for static build

* empty commit

* [mkldnn-v1.0] Enable base code with new APIs. (#16064)

* fix comments (#8)

* add base code for mkldnn 1.0

* fix comments

* Update mkldnn.mk

* add base code for mkldnn 1.0

* fix build

* fix lint

* fix lint

* [mkldnn-v1.0] Add MKL-DNN Convolution (#16141)

* add mkldnn conv

* revert unnecessary change

* fix testcase fail for cpu: test_convolution_independent_gradients

* fix failed testcase: test_reshape_transpose_6d&&test_weight_async_reorder

* fix comments

* change variable name from weights to weight in mkldnn_conv

* [mkldnn-v1.0] Add MKL-DNN activation (#16195)

* add mkldnn act; pass lint; pass mnist training

* make bwd as private member

* [mkldnn-v1.0] Add MKL-DNN BN (#16199)

* add mkldnn bn

* add static_cast to transform data type

* change mkldnn_args_map_t

* retrigger CI

* add mkldnn lrn (#16223)

* [mkldnn-v1.0] Add MKL-DNN Transpose (#16250)

* add mkldnn transpose

* using mkldnn_args_map_t instead of std::unordered_map<int, mkldnn::memory>

* [mkldnn-v1.0] Add MKL-DNN softmax (#16246)

* add mkldnn softmax

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN FC (#16221)

* add mkldnn fc; pass lint; pass mnist training

* add TODO info for future debug

* [mkldnn-v1.0] Add MKL-DNN  deconv (#16259)

* add mkldnn deconv

* coding style

* trigger CI

* add mkldnn softmax_output (#16222)

* [mkldnn-v1.0] Add MKL-DNN Pooling (#16272)

* add mkldnn pooling

* add workaround for mkldnn v1.0 pooling fwd && bwd workspace mismatch

* code clean

* fix lint error

* trigger CI

* trigger CI

* add extra work_space check and fix some typo

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN reshape&flatten&expand_dims (#16258)

* Add mkldnn 1.0 support for reshape/flatten/expanddims ops

* improve log & modify definition location of args_map_

* fix comments

* rebase code

* trigger CI

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN int8 activation&pooling&flatten (#16425)

* Add mkldnn quantized activation/pooling/flatten

* int8 flatten

* [mkldnn-1.0] int8 conv quantize dequantize requantize (#16283)

* int8 conv quantize dequantize requantize

Change-Id: Ibd9df97288a95c61d6d85ec3831fd18b626ca283

* Fix lint

* Fix clang build

Change-Id: I9468774d014c852901e4cc3bffabd8a3d8004519

* add mkldnn sum concat (#16263)

* [mkldnn-1.0] mkldnn int8 elemwise_add (#16454)

* add mkldnn int8 elemwise_add

* add workaround to fix format any issue

* code clean

* upgrade int8 bn to MKLDNN1.0 (#16458)

* [mkldnn-v1.0] Fused RNN Op (#16420)

* [mkldnn-v1.0] Add MKL-DNN int8 fc (#16457)

* Add mkldnn_v1.0 int8 fc

* trigger CI

* trigger CI

* [mkldnn-v1.0] Update enabling flag for MKL dropout (#16433)

* use MSHADOW_USE_MKL to determine whther to use mkl optimized dropout

* rebase code

* [mkldnn-1.0] upgrade int8 concat to MKLDNN1.0 (#16466)

* [mkldnn-1.0] upgrade int8 concat to MKLDNN1.0

* fix lint

* use mkldnn_args_map_t

* update dict usage style

* retrigger CI

* retrigger CI again

* retrigger CI again 2

* [mkldnn-v1.0] Add MKL-DNN slice (#16484)

* change slice to mkldnn v1.0

* fix lint

* [mkldnn-1.0] add mkldnn subgraph fc (#16468)

* add mkldnn subgraph fc

* code clean

* trigger CI

* [mkldnn-v1.0]enable mkldnn concat (#16507)

* enable mkldnn concat

* trigger CI

* trigger CI

* [mkldnn-v1.0] Enable mkldnn cpp-test, copy op, concat op (#16503)

* [mkldnn-v1.0] Enable mkldnn test, copy op, concat op

Exclude gpu topology via MXNET_USE_CUDA

nit

default format

Remove whitespace

* Unix-GPU Tensor-RT build timeout, re-trigger CI

* [mkldnn-1.0] add skipped case for mkldnn_v1.0 (#16470)

* add skipped case for mkldnn_v1.0

* enable mkl quantized testcase

* enable skipped testcase

* trigger CI

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-1.0]enable mkldnn elemwise_sum (#16521)

* enable mkldnn elemwise_sum

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-v1.0] Enable more checks for MXNET_USE_MKLDNN (#16520)

* open USE_MKLDNN check

* trigger ci

* ci

* [mkldnn-v1.0]Minor fix for leakyrelu compile flag (#16519)

* change to MXNET_USE_MKLDNN == 100

* trigger

* remove MKL license (#16534)

* change MXNET_USE_MKLDNN from 100 to 1 (#16551)

* re-enable unit tests (#16565)

* [mkldnn-v1.0] Skip flaky test for unidirectional rnn_relu (#16545)

Skip `test_rnnrelu_sym`, and add some issue tracking message

Add return

Revert test_rnnrelu_sym to origin

* Add some annotations and log strings, rename mem_desc variables (#16609)

* [mkldnn-v1.0]set fc weight layout as mkldnn v0.2x did (#16593)

* set fc weight layout as mkldnn v0.2x did

* fix lint

* [mkldnn-v1.0] Upgrade to MKL-DNN v1.0.4 patch release (#16592)

* upgrade to mkldnn v1.0.3 patch release

* retrigger ci

* mkldnn v1.0.4 patch release

* [mkldnn-1.0]Rebase to master (#16648)


* [mkldnn-v1.0]rebase with master (#16649)


* [mkldnn-v1.0] Minor fix of mkldnn-v1.0 transition (#16644)

mk and rm directory in mkldnn.mk

ndarray.cc redundant whitespace

mkldnn_act rename variables of bwd primitives

mkldnn_rnn.cc iterator -> const_iterator

Use != instead of < for iterator in for-loop

Code comment for explaining the reason why excludes the last layer

* [mkldnn-v1.0]rm int8 sum workaround (#16623)

* rm int8 sum workaround due to mkldnn lib update

* simple dims asignments in mkldnn_quantized_elemwise_add.cc

* make MKLDNN macro simple for imperative_utils.h (#16652)

* fix ci jenkins step groovy (#16659)

* Adopt autograd.record() context to RNNOp (#16657)

* Use memcopy instead of set_handle when num_layer=0, direction=1 (#16663)

* fallback mkldnn fc bwd in imperative mode (#16672)

* disable MKLDNN FC backward

* [mkldnn-v1.0] Must reorder and emplace weights for inference primitives (#16682)

* add default parameter for mkldnn rnn
DickJC123 pushed a commit that referenced this pull request Nov 1, 2019
* Beginning of RTC of pointwise ops

* Code generation from the given JSON

* add initial simple_partition_pass and use it for pointwise fusion

* fix the fusion, use a symbol.Copy() at the beginning of binding function, use the name of input nodes in the cuda code

* Fixes

* Adding support for attribute inference for backward nodes when fusing

* keep proper input ordering for fused Op

* instantiate the indexed_graph before starting the subgraph replacement, return a new graph to reset the indexed_graph

* Fuse backward

* fix ordering of subgraph node inputs using subgraph topological ordering instead of main graph topological ordering, add tvm.patch

* excluse forward node fusion during the fusion of the nodes in the backward graph

* Dealing with fused backward nodes inferattr

* use subgraph.indexed_graph() instead of main for _FusedOpHelper nodes node_id, invert control_deps loop to modify topology of subgraph before calling its indexed_graph(), check that all node of the first DFSVisit are actually in the subgraph

* Adding support for other reqs in codegen

* Fix

* Cleaning

* Change the TVM submodule

* More cleaning

* Making linter happy

* Do fusion only if default context is GPU

* Fixes for tests
Add powerscalar and rpowerscalar, fix return type of zero and one
Cleaning, fixing lint
Go back to proper TVM submodule

* Fix the TVM commit

* Fix lint

* Guard fusion with MXNET_USE_CUDA

* Fix

* Fix clang-tidy

* Add erf and erfinv backward

* Gluon support for fusion

* Cleaning

* Cleaning and allow shape/type change in FusedOp

* Fixing Gluon bugs

* Fixing after rebase

* Fixing race condition and guarding against races when using NVRTC

* Cleaning and renaming FusedOp to _FusedOp

* Going easy on Windows compiler

* Disable fusion on Windows for now

* Refactor InferAttr and InferShapeAttr

* Added slice and half2 support to FusedOp

* Fix lint errors

* Added multiple types support for vector loading/storing

* add slice fusion when it's at the beginning of subgraphs

* Removed constant ndim assumption in fused op

* Fix memory alignment issue in slice for FusedOp

* Fixes

* Fix lint errors

* Do not include cuda_fp16.h

* Refactor fused op op lists

* Make linter happy

* Changes from review

* Fixes after rebase

* Expand FusedOp support for slice

* Fix for fp16 _zeros and _ones

* Fix

* Moving aux functions to unnamed namespace and detail namespace -> fusion
namespace

* Disabling fusion if it alters topological order of inputs

* Print code only when env variable is set

* Fix

* Fix lint and 2 tests that specify the same names for multiple inputs

* Fixes from review and disabling fusion of slice with non-default step

* Add amp_cast to fusion, fixes

* Add amp_multicast and its backward to the list of support ops

* Apply wording suggestions from code review

Co-Authored-By: Aaron Markham <[email protected]>

* Apply wording suggestions from code review

Co-Authored-By: Aaron Markham <[email protected]>

* Make clearer comment

* Adding punctuation and capitalization to \brief descriptions

* Fix

* Fix

* Add backward_cast to fusion

* Adding unittests for fusion. Fix for erfinv_grad

* Adding slice ops and add_n to tests

* Fixes from review

* Setting inplace option

* Fix lint

* Storing double in half

* Retrigger CI

* Slight relaxing of the relative tolerance in the test

* Move the env variable check to the end

* Fix a race condition between InferShape and scheduled Forward

* Fix flakey test_fusion test involving fp32 erfinv op.

* Fix from review

* Added broadcast_like and slice_like to fused op

* Minor fix and cleanup

* Added negative axis support in slice_axis, temporarily disabled fusion of slice_like and broadcast_like

* Added axes support to slice_like

* Added axis support to broadcast_like

* Add fast_load_slice function to fused op code

* Added runtime switch for choosing fast and slow slice kernel

* Fix lint and warning

* Going easy on Windows compiler (again)

* Fix slice_like

* Debug broadcast_like fusion

* Fix lint

* Fix lint

* Trigger CI

* Get rid of the initializer list

* Fix backward calls with different gradient type

* avoid cycle when adding node specific for inputs of subgraph for pointwise fusion

* Fix lint

* Add namespace to the fusion implementations

* Set launch bounds on the fused kernel

* Fix NumPy tests

* Test showcasing an issue fixed in PR #16553

* Cast scalarts to FP32 and perform (a*1.0/b) instead of (a/b)

Fix lint errors

Fix lint

* Fix a bug in cycle detection for inputs only op in pointwise fusion

* Add comments to simple_partition_pass.h file
apeforest pushed a commit that referenced this pull request Nov 6, 2019
yajiedesign pushed a commit to yajiedesign/mxnet that referenced this pull request Nov 6, 2019
yajiedesign pushed a commit to yajiedesign/mxnet that referenced this pull request Nov 6, 2019
apeforest pushed a commit that referenced this pull request Nov 6, 2019
…6553)

* Debugging reqs

* Move literal strings to const static members

* Fix lint
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request Nov 14, 2019
ArmageddonKnight pushed a commit to UofT-EcoSystem/incubator-mxnet that referenced this pull request Feb 1, 2020
* Add cached op threadsafe version with corresponding C APIs, CPP Package changes, CI changes and tests

* Fix download cmd in runtime_functions

* Add CI changes

* Add stage

Fix indentation

* Fix lint

* Change to DEFAULT for C API

* Fix mxnet_unit_tests path

* export correct LD_LIBRARY_PATH

* Add cpp include dirs

* Build test with USE_CPP_PACKAGE

* Add cached op threadsafe version with corresponding C APIs, CPP Package changes, CI changes and tests

* Fix download cmd in runtime_functions

* Merge

* change mkldnn lib name

* Add static_alloc, static_Shape support

* Address review comments

* Make GetCachedOpThreadSafeState similar to cached_op

* Address review comments: comments for locking strategy

* multithreaded inference tutorial

* [Estimator] handle composite metrics in estimator (apache#16676)

* handle composite metrics in estimator

* fix composite metric case in handlers

* remove unused import

* [Estimator] refactor estimator to allow overriding evaluate/fit of a batch (apache#16678)

* refactor estimator to allow overriding evaluate/fit of a batch

* add doc to explain call structure and how to override

* fix and doc

* Pointwise fusion for GPU (apache#15167)

* Beginning of RTC of pointwise ops

* Code generation from the given JSON

* add initial simple_partition_pass and use it for pointwise fusion

* fix the fusion, use a symbol.Copy() at the beginning of the binding function, use the names of input nodes in the CUDA code

* Fixes

* Adding support for attribute inference for backward nodes when fusing

* keep proper input ordering for fused Op

* instantiate the indexed_graph before starting the subgraph replacement, return a new graph to reset the indexed_graph

* Fuse backward

* fix ordering of subgraph node inputs using subgraph topological ordering instead of main graph topological ordering, add tvm.patch

* exclude forward node fusion during the fusion of the nodes in the backward graph

* Dealing with fused backward nodes inferattr

* use subgraph.indexed_graph() instead of main for _FusedOpHelper nodes' node_id, invert control_deps loop to modify topology of subgraph before calling its indexed_graph(), check that all nodes of the first DFSVisit are actually in the subgraph

* Adding support for other reqs in codegen

* Fix

* Cleaning

* Change the TVM submodule

* More cleaning

* Making linter happy

* Do fusion only if default context is GPU

* Fixes for tests
Add powerscalar and rpowerscalar, fix return type of zero and one
Cleaning, fixing lint
Go back to proper TVM submodule

* Fix the TVM commit

* Fix lint

* Guard fusion with MXNET_USE_CUDA

* Fix

* Fix clang-tidy

* Add erf and erfinv backward

* Gluon support for fusion

* Cleaning

* Cleaning and allow shape/type change in FusedOp

* Fixing Gluon bugs

* Fixing after rebase

* Fixing race condition and guarding against races when using NVRTC

* Cleaning and renaming FusedOp to _FusedOp

* Going easy on Windows compiler

* Disable fusion on Windows for now

* Refactor InferAttr and InferShapeAttr

* Added slice and half2 support to FusedOp

* Fix lint errors

* Added multiple types support for vector loading/storing
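
For context on the vector loading/storing bullet above, the following is a rough CUDA sketch, illustrative only and not the actual FusedOp code; it assumes an even element count and half2-aligned pointers, and shows how two FP16 values can be moved per transaction by viewing the arrays as half2:

#include <cuda_fp16.h>

// Each thread loads/stores two FP16 elements at once via __half2, doing the
// arithmetic in FP32 so it compiles for any GPU architecture.
__global__ void add_half2_example(const __half* a, const __half* b,
                                  __half* out, int n) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (2 * i + 1 < n) {
    const __half2* a2 = reinterpret_cast<const __half2*>(a);
    const __half2* b2 = reinterpret_cast<const __half2*>(b);
    __half2* o2 = reinterpret_cast<__half2*>(out);
    const float2 af = __half22float2(a2[i]);   // widen both lanes to FP32
    const float2 bf = __half22float2(b2[i]);
    o2[i] = __float22half2_rn(make_float2(af.x + bf.x, af.y + bf.y));
  }
}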

* add slice fusion when it's at the beginning of subgraphs

* Removed constant ndim assumption in fused op

* Fix memory alignment issue in slice for FusedOp

* Fixes

* Fix lint errors

* Do not include cuda_fp16.h

* Refactor fused op op lists

* Make linter happy

* Changes from review

* Fixes after rebase

* Expand FusedOp support for slice

* Fix for fp16 _zeros and _ones

* Fix

* Moving aux functions to unnamed namespace and detail namespace -> fusion namespace

* Disabling fusion if it alters topological order of inputs

* Print code only when env variable is set

* Fix

* Fix lint and 2 tests that specify the same names for multiple inputs

* Fixes from review and disabling fusion of slice with non-default step

* Add amp_cast to fusion, fixes

* Add amp_multicast and its backward to the list of supported ops

* Apply wording suggestions from code review

Co-Authored-By: Aaron Markham <[email protected]>

* Apply wording suggestions from code review

Co-Authored-By: Aaron Markham <[email protected]>

* Make clearer comment

* Adding punctuation and capitalization to \brief descriptions

* Fix

* Fix

* Add backward_cast to fusion

* Adding unittests for fusion. Fix for erfinv_grad

* Adding slice ops and add_n to tests

* Fixes from review

* Setting inplace option

* Fix lint

* Storing double in half

* Retrigger CI

* Slight relaxing of the relative tolerance in the test

* Move the env variable check to the end

* Fix a race condition between InferShape and scheduled Forward

* Fix flaky test_fusion test involving fp32 erfinv op.

* Fix from review

* Added broadcast_like and slice_like to fused op

* Minor fix and cleanup

* Added negative axis support in slice_axis, temporarily disabled fusion of slice_like and broadcast_like

* Added axes support to slice_like

* Added axis support to broadcast_like

* Add fast_load_slice function to fused op code

* Added runtime switch for choosing fast and slow slice kernel

* Fix lint and warning

* Going easy on Windows compiler (again)

* Fix slice_like

* Debug broadcast_like fusion

* Fix lint

* Fix lint

* Trigger CI

* Get rid of the initializer list

* Fix backward calls with different gradient type

* avoid a cycle when adding the node specific to the inputs of a subgraph for pointwise fusion

* Fix lint

* Add namespace to the fusion implementations

* Set launch bounds on the fused kernel

* Fix NumPy tests

* Test showcasing an issue fixed in PR apache#16553

* Cast scalars to FP32 and perform (a*1.0/b) instead of (a/b)

Fix lint errors

Fix lint

* Fix a bug in cycle detection for inputs-only ops in pointwise fusion

* Add comments to simple_partition_pass.h file

* fix install dir (apache#16690)

* [numpy] add numpy operator : append (apache#16564)

* add operator: append; fix op concatenate when axis=None

* pylint disable

remove mistake

disable pylint

* Initializer.__eq__ (apache#16680)

* fix binary dependencies in CD and nightly (apache#16693)

* [MKL-DNN] Add mxnet mkldnn cmake tutorial (apache#16688)

* add mxnet mkldnn cmake instruction

* improve doc

* OMP->OpenMP

* Revert "[MKLDNN]Fix reorder2default (apache#16602)" (apache#16697)

This reverts commit dd4eaf5.

* [Estimator] refactor estimator and clarify docs (apache#16694)

* refactor estimator and clarify docs

* fix info message and test

* clean up after releasing logging handler

* Eliminate common expressions (apache#15657)

* Eliminate common expressions from a graph

* Guarding against optimizing out stateful ops and ops that require resources

* Fix lint

* Added THasDeterministicOutput to multiple ops

* Debug eliminate common expr

* Added test

* Expose get_optimized_symbol

* Fix

* Fix 2

* Add doc to the Python call

* Add env var MXNET_ELIMINATE_COMMON_EXPR, default true

* Add comments, improve readability of eliminate_common_expr_pass.cc

* Expand testing

* Lower priority of THasDeterministicOutput attr for equal Node test

* Change mx.gpu() to mx.cpu() in tests

* Skip CSE test on Windows (as env variable setting during test does not work there)

* Add missing import sys

* Add missing import logging

* Backport of apache#16711, apache#16737, apache#16408 to 1.6 branch (apache#16763)

* support mixed-precision true_divide (apache#16711)

* [MKLDNN] use dim_t instead of int in slice/transpose operators (apache#16737)

* use dim_t instead of int

* fix same issue in pooling

* rebase code

* trigger CI

* Add MXNet Ops for fast multihead attention (apache#16408)

* add MXNet Ops for fast multihead attention

* add cutlass as 3rdparty dependency

* add cutlass to compilation flags

* remove all cutlass stuff

* add better error message and description and remove cutlass from compilation flags

* change credit for the approach since the code has changed

* fix typos

* correct another typo

* Add all the cuda/cublas helper functions

* remove tests using kAddTo

* only use cublasStridedBatchedGemm if CUDA >= 9.1

* add equivalent mxnet code in description of mha ops

* remove a wrong copy-paste

* add _contrib namespace and note GPU-only in the description

* add warning in bwd_ignore_zero_init description, also test with fp32

* add error return if bwd_ignore_zero_init is used without MXNET_EXEC_ENABLE_ADDTO

* remove std::move for clang

* remove bwd_ignore_zero_init flag

* remove bwd_ignore_zero_init in test_operator_gpu.py

* fix typo

* fix another typo

* Removed unrelated test

* Add example and documentation for multi threaded inference

* Add LICENSE

* Add get_model.py

* Add license for README

* Refactor cached op and cached op threadsafe

* Add limitation

* Add tests for naive engine

* Add latest test changes

* Thread Safety tests in NaiveEngine mode

* Thread Safety tests update

* Update thread safety tests, add unsupported use cases

* Changes to doc and refactor

* Fix todo owner, indentation and mx_float->float

* Refactor cached op code, remove num_threads arg from example

* Fix lint

* Fix warning

* Add back cython, required for unix-gpu build

* Fix for windows

* Add bulking support for thread safe cached op version

* Add support for subgraph testing

* import mxnet before calling get_backend_symbol

* Fix symbol json name

* Refactor DynamicForward

* Add comments

* Add DMLC_ATTRIBUTE_UNUSED

* Fix use_naive_run issue

* Fix lint

* Revert unittest_cpp to old test since it doesn't test thread safety

* Fix doc

Co-authored-by: Sheng Zha <[email protected]>
Co-authored-by: Przemyslaw Tredak <[email protected]>
Co-authored-by: Tao Lv <[email protected]>
Co-authored-by: JiangZhaoh <[email protected]>
Co-authored-by: Leonard Lausen <[email protected]>
Co-authored-by: Xinyu Chen <[email protected]>
Co-authored-by: Zhennan Qin <[email protected]>
zheyuye pushed a commit to zheyuye/incubator-mxnet that referenced this pull request Feb 19, 2020
rondogency pushed a commit to rondogency/incubator-mxnet that referenced this pull request Jul 2, 2020